
    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good", i.e., to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of compressed sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
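    As a concrete illustration of the recovery problem the paper analyzes (not of its verifiable conditions), here is a minimal sketch of noiseless $\ell_1$-recovery posed as a linear program; the dimensions m, n, s and the Gaussian sensing matrix are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5

# Gaussian sensing matrix and an s-sparse ground truth.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP over x = u - v with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```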

    A fast and accurate first-order algorithm for compressed sensing

    This paper introduces a new, fast, and accurate algorithm for solving problems in the area of compressed sensing, and more generally, in the area of signal and image reconstruction from indirect measurements. The algorithm is inspired by recent progress in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique. In particular, there is a crucial property that makes these methods extremely efficient for solving compressed sensing problems. Numerical experiments show the promising performance of our method on problems which involve the recovery of signals spanning a large dynamic range.
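    The abstract does not spell out the algorithm, so the following is only a toy sketch of the underlying idea: Nesterov-style smoothing of the $\ell_1$ term (a Huber approximation with parameter mu) combined with an accelerated first-order method. The function name nesterov_solve and all parameter values are illustrative, not from the paper.

```python
import numpy as np

def smoothed_l1_grad(x, mu):
    # Gradient of the Huber (Nesterov-smoothed) approximation of ||x||_1.
    return np.clip(x / mu, -1.0, 1.0)

def nesterov_solve(A, b, lam=0.05, mu=1e-3, iters=500):
    """Accelerated gradient descent on lam*||x||_{1,mu} + 0.5*||Ax - b||^2."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2 + lam / mu  # Lipschitz constant of the gradient
    x = z = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ z - b) + lam * smoothed_l1_grad(z, mu)
        x_new = z - grad / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example usage on a random instance with a 5-sparse signal:
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
b = A @ np.where(np.arange(100) < 5, 1.0, 0.0)
x_hat = nesterov_solve(A, b)
```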

    On the linear independence of spikes and sines

    The purpose of this work is to survey what is known about the linear independence of spikes and sines. The paper provides new results for the case where the locations of the spikes and the frequencies of the sines are chosen at random. This problem is equivalent to studying the spectral norm of a random submatrix drawn from the discrete Fourier transform matrix. The proof depends on an extrapolation argument of Bourgain and Tzafriri.
    Comment: 16 pages, 4 figures. Revision with new proof of the major theorem.
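    A quick numerical illustration of the equivalence mentioned above, under assumed parameters n and p: draw a random submatrix of the unitary DFT matrix and compute its spectral norm; a value bounded away from 1 certifies linear independence of the corresponding spikes and sines.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix

# Keep each row (spike location) and column (sine frequency) independently
# with probability p, then measure the spectral norm of the random submatrix.
p = 0.25
rows = rng.random(n) < p
cols = rng.random(n) < p
sub = F[np.ix_(rows, cols)]
print("spectral norm:", np.linalg.norm(sub, 2))
```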

    Analysis of Basis Pursuit Via Capacity Sets

    Finding the sparsest solution $\alpha$ of an under-determined linear system of equations $D\alpha = s$ is of interest in many applications. This problem is known to be NP-hard. Recent work studied conditions on the support size of $\alpha$ that allow its recovery using $\ell_1$-minimization, via the Basis Pursuit algorithm. These conditions often rely on a scalar property of $D$ called the mutual coherence. In this work we introduce an alternative set of features of an arbitrarily given $D$, called the "capacity sets". We show how these can be used to analyze the performance of Basis Pursuit, leading to improved bounds and predictions of performance. Both theoretical and numerical methods are presented, all using the capacity values, and shown to lead to improved assessments of Basis Pursuit's success in finding the sparsest solution of $D\alpha = s$.
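    For context on the baseline that capacity sets are compared against, here is a sketch computing the mutual coherence of a dictionary together with the classical coherence-based sparsity bound; the dimensions are arbitrary.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
mu = mutual_coherence(D)
# Classical coherence-based guarantee: Basis Pursuit recovers any alpha with
# ||alpha||_0 < (1 + 1/mu) / 2 from s = D @ alpha.
print("coherence:", mu, " guaranteed sparsity <", 0.5 * (1 + 1 / mu))
```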

    A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank

    Rank minimization has attracted a lot of attention due to its robustness in data recovery. To overcome the computational difficulty, rank is often replaced with the nuclear norm. For several rank minimization problems, such a replacement has been theoretically proven to be valid, i.e., the solution to the nuclear norm minimization problem is also the solution to the rank minimization problem. Although it is easy to believe that such a replacement may not always be valid, no concrete example has ever been found. We argue that such validity checking cannot be done by numerical computation and show, by analyzing the noiseless latent low-rank representation (LatLRR) model, that even for very simple rank minimization problems the validity may still break down. As a by-product, we find that the solution to the nuclear norm minimization formulation of LatLRR is non-unique. Hence the results of LatLRR reported in the literature may be questionable.
    Comment: accepted by ECML PKDD 2013.
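    To make the surrogate concrete: a small sketch of the nuclear norm and its proximal operator (singular value thresholding), the basic computational primitive in nuclear norm minimization. This is generic background, not the LatLRR analysis itself.

```python
import numpy as np

def nuclear_norm(X):
    # Sum of singular values: the convex surrogate of rank discussed above.
    return np.linalg.svd(X, compute_uv=False).sum()

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 8))  # rank <= 5
print("rank:", np.linalg.matrix_rank(X), " nuclear norm:", nuclear_norm(X))
print("rank after SVT:", np.linalg.matrix_rank(svt(X, tau=2.0)))
```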

    (k,q)-Compressed Sensing for dMRI with Joint Spatial-Angular Sparsity Prior

    Advanced diffusion magnetic resonance imaging (dMRI) techniques, like diffusion spectrum imaging (DSI) and high angular resolution diffusion imaging (HARDI), remain underutilized compared to diffusion tensor imaging because the scan times needed to produce accurate estimations of fiber orientation are significantly longer. To accelerate DSI and HARDI, recent methods from compressed sensing (CS) exploit a sparse underlying representation of the data in the spatial and angular domains to undersample in the respective k- and q-spaces. State-of-the-art frameworks, however, impose sparsity in the spatial and angular domains separately and involve the sum of the corresponding sparse regularizers. In contrast, we propose a unified (k,q)-CS formulation which imposes sparsity jointly in the spatial-angular domain to further increase sparsity of dMRI signals and reduce the required subsampling rate. To efficiently solve this large-scale global reconstruction problem, we introduce a novel adaptation of the FISTA algorithm that exploits dictionary separability. We show on phantom and real HARDI data that our approach achieves significantly more accurate signal reconstructions than the state of the art while sampling only 2-4% of the (k,q)-space, allowing for the potential of new levels of dMRI acceleration.
    Comment: To be published in the 2017 Computational Diffusion MRI Workshop of MICCAI.
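    The paper's FISTA adaptation is not specified in the abstract; the sketch below shows the generic separability trick for a Kronecker dictionary $\Psi \otimes \Phi$ inside a FISTA loop, with hypothetical inputs X, Phi, Psi and illustrative parameter values.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_separable(X, Phi, Psi, lam=0.1, iters=200):
    """FISTA for min 0.5*||X - Phi C Psi^T||_F^2 + lam*||C||_1.

    The Kronecker dictionary Psi (x) Phi is never formed: applying it to
    vec(C) is the matrix product Phi @ C @ Psi.T, which is the separability
    the abstract alludes to.
    """
    L = (np.linalg.norm(Phi, 2) * np.linalg.norm(Psi, 2)) ** 2
    C = Z = np.zeros((Phi.shape[1], Psi.shape[1]))
    t = 1.0
    for _ in range(iters):
        grad = Phi.T @ (Phi @ Z @ Psi.T - X) @ Psi
        C_new = soft(Z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = C_new + ((t - 1) / t_new) * (C_new - C)
        C, t = C_new, t_new
    return C
```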

    Guaranteed clustering and biclustering via semidefinite programming

    Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation for the biclustering problem with similar recovery guarantees. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as partitioning the nodes of a weighted bipartite complete graph such that the sum of the densities of the resulting bipartite complete subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program in the case that the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
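    As a sketch of the general approach, here is one standard SDP relaxation for recovering planted clusters from an affinity matrix, not necessarily the exact program analyzed in the paper, assuming cvxpy with its default solver:

```python
import numpy as np
import cvxpy as cp

def cluster_sdp(W, k):
    """A common SDP relaxation for recovering k dense clusters from an
    affinity matrix W; a sketch in the spirit of the abstract."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    constraints = [X >= 0, cp.trace(X) == k, cp.sum(X, axis=1) == 1]
    prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints)
    prob.solve()
    return X.value

# Two planted clusters plus noise; the solution is near block-diagonal,
# so thresholding its entries reveals the cluster memberships.
rng = np.random.default_rng(4)
W = 0.1 * rng.random((20, 20))
W[:10, :10] += 0.9
W[10:, 10:] += 0.9
W = (W + W.T) / 2
X = cluster_sdp(W, k=2)
print((X > 1 / 20).astype(int))
```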

    Adaptive Measurement Network for CS Image Reconstruction

    Conventional compressive sensing (CS) reconstruction is very slow because of its characteristic of solving an optimization problem. A convolutional neural network can realize fast processing while achieving comparable results. However, high-quality CS image recovery depends not only on good reconstruction algorithms, but also on good measurements. In this paper, we propose an adaptive measurement network in which the measurement is obtained by learning. The new network consists of a fully-connected layer and ReconNet. The fully-connected layer, which has a low-dimensional output, acts as the measurement. We train the fully-connected layer and ReconNet simultaneously and obtain an adaptive measurement. Because the adaptive measurement fits the dataset better than a random Gaussian measurement matrix, under the same measurement rate it can extract the information of the scene more efficiently and obtain better reconstruction results. Experiments show that the new network outperforms the original one.
    Comment: 11 pages, 8 figures.
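    A minimal sketch of the architecture described above, assuming PyTorch: a trainable fully-connected measurement layer followed by a small convolutional reconstruction network standing in for ReconNet. The block size, measurement rate, and layer shapes are illustrative guesses, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AdaptiveMeasurementNet(nn.Module):
    """Learned linear measurements plus a convolutional reconstruction stage."""

    def __init__(self, block_size=33, ratio=0.1):
        super().__init__()
        n = block_size * block_size
        m = int(n * ratio)                           # measurement rate
        self.block_size = block_size
        self.measure = nn.Linear(n, m, bias=False)   # learned sensing matrix
        self.init = nn.Linear(m, n)                  # back to the image domain
        self.recon = nn.Sequential(                  # simple conv refinement
            nn.Conv2d(1, 64, 11, padding=5), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 7, padding=3),
        )

    def forward(self, blocks):                       # blocks: (B, block_size^2)
        y = self.measure(blocks)                     # adaptive measurements
        x0 = self.init(y).view(-1, 1, self.block_size, self.block_size)
        return self.recon(x0)

net = AdaptiveMeasurementNet()
out = net(torch.randn(4, 33 * 33))
print(out.shape)  # torch.Size([4, 1, 33, 33])
```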

    Toward a unified theory of sparse dimensionality reduction in Euclidean space

    Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss transform [KN14] with $s$ non-zeroes per column. For a subset $T$ of the unit sphere and a given $\varepsilon\in(0,1/2)$, we study settings for $m,s$ required to ensure $$\mathop{\mathbb{E}}_\Phi \sup_{x\in T} \left|\|\Phi x\|_2^2 - 1\right| < \varepsilon,$$ i.e. so that $\Phi$ preserves the norm of every $x\in T$ simultaneously and multiplicatively up to $1+\varepsilon$. We introduce a new complexity parameter, which depends on the geometry of $T$, and show that it suffices to choose $s$ and $m$ such that this parameter is small. Our result is a sparse analog of Gordon's theorem, which was concerned with a dense $\Phi$ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results on using the sparse Johnson-Lindenstrauss transform in numerical linear algebra, classical and model-based compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
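    An empirical look at the quantity being bounded, for a sparse JL matrix with exactly $s$ nonzeros per column and a finite set $T$ of unit vectors (all sizes are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 1000, 100, 8

# Sparse JL matrix: s nonzeros per column, random rows, signs +-1/sqrt(s).
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Empirical distortion sup_{x in T} | ||Phi x||^2 - 1 | over a finite T.
T = rng.standard_normal((n, 200))
T /= np.linalg.norm(T, axis=0)
distortion = np.abs(np.linalg.norm(Phi @ T, axis=0) ** 2 - 1.0)
print("max distortion over T:", distortion.max())
```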

    Compressed sensing and robust recovery of low rank matrices

    In this paper, we focus on compressed sensing and recovery schemes for low-rank matrices, asking under what conditions a low-rank matrix can be sensed and recovered from incomplete, inaccurate, and noisy observations. We consider three schemes, one based on a certain Restricted Isometry Property and two based on directly sensing the row and column space of the matrix. We study their properties in terms of exact recovery in the ideal case, and robustness issues for approximately low-rank matrices and for noisy measurements.
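    A minimal sketch of the second kind of scheme, directly sensing the row and column spaces, in the ideal exact-rank, noiseless case. The identity $M = Y_1 (G_2 Y_1)^{-1} Y_2$ holds whenever $G_2 M G_1$ is invertible; this is a generic construction, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, r = 60, 50, 4

# Rank-r ground truth.
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Sense the column space (Y1 = M G1) and the row space (Y2 = G2 M).
G1 = rng.standard_normal((n2, r))
G2 = rng.standard_normal((r, n1))
Y1, Y2 = M @ G1, G2 @ M

# Exact reconstruction in the ideal noiseless case.
M_hat = Y1 @ np.linalg.solve(G2 @ Y1, Y2)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```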